
feat(opencode): add dynamic model fetching for OpenAI-compatible provider#14277

Open
shivamashtikar wants to merge 5 commits into anomalyco:dev from shivamashtikar:feat/dynamic-model-fetch

Conversation

@shivamashtikar

@shivamashtikar shivamashtikar commented Feb 19, 2026

Issue for this PR

Closes #12814
Closes #10633

Type of change

  • Bug fix
  • New feature
  • Refactor / code improvement
  • Documentation

What does this PR do?

Adds a shouldFetchModels config option that dynamically fetches available models from the provider's OpenAI-compatible API at startup. This eliminates the need to manually list every model in opencode.json for custom providers.

Comparison with PR #13896:
PR #13896 implements LiteLLM-specific model loading. This PR provides a generic implementation supporting any OpenAI-compatible provider (LiteLLM, Ollama, vLLM, custom proxies, etc.). Both try /model/info first, but this PR adds a proper fallback to the standard /models endpoint.

How it works:

  • When a custom provider has baseURL configured, models are fetched in the background (unless shouldFetchModels: false)
  • First tries /model/info for rich metadata (limits, costs, capabilities)
  • Falls back to /models endpoint if /model/info unavailable
  • Manual config models always override fetched ones
  • Non-blocking - startup proceeds immediately, models merged when response arrives
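The endpoint fallback in the steps above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `discoverModels`, `get`, and the response shapes are hypothetical stand-ins, with `get` injected in place of a real HTTP client.

```typescript
// Hypothetical sketch of the fallback flow: try /model/info for rich
// metadata, then fall back to the standard /models endpoint.
type ModelInfo = { id: string; metadata?: unknown }

async function discoverModels(
  get: (path: string) => Promise<{ ok: boolean; data: any[] }>,
): Promise<ModelInfo[]> {
  // LiteLLM-style /model/info carries limits, costs, and capabilities.
  const rich = await get("/model/info").catch(() => ({ ok: false, data: [] as any[] }))
  if (rich.ok) {
    return rich.data.map((m) => ({ id: m.model_name, metadata: m.model_info }))
  }
  // Standard OpenAI-compatible /models returns bare model ids.
  const plain = await get("/models").catch(() => ({ ok: false, data: [] as any[] }))
  return plain.ok ? plain.data.map((m) => ({ id: m.id })) : []
}
```

Injecting the fetcher keeps the fallback decision testable without a live endpoint.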

Example:

{
  "provider": {
    "my-proxy": {
      "npm": "@ai-sdk/openai-compatible",
      "options": {
        "baseURL": "https://my-proxy.com/v1",
        "apiKey": "{env:API_KEY}"
      }
    }
  }
}

To disable:

{
  "provider": {
    "my-proxy": {
      "shouldFetchModels": false,
      "options": { "baseURL": "https://my-proxy.com/v1" }
    }
  }
}
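The non-blocking merge behavior described above (startup proceeds immediately; manual models always win) could look roughly like this. All names here are illustrative, not the PR's actual identifiers; per the commit messages, the real implementation lives inside state().

```typescript
// Hypothetical sketch of the fire-and-forget background fetch: the
// promise is intentionally not awaited, so startup is not delayed.
type Provider = { models: Record<string, any>; shouldFetchModels?: boolean }

function initProvider(
  provider: Provider,
  fetchModels: () => Promise<Record<string, any>>,
): Provider {
  if (provider.shouldFetchModels === false) return provider
  fetchModels()
    .then((fetched) => {
      // Spread order makes manually configured models override fetched ones.
      provider.models = { ...fetched, ...provider.models }
    })
    .catch(() => {}) // a network failure leaves the manual models untouched
  return provider
}
```

The spread order is the whole precedence rule: fetched entries are applied first, then overwritten by anything the user listed explicitly.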

How did you verify your code works?

  • bun test test/provider/fetch-models.test.ts - 4 tests pass
  • bun test test/session/llm.test.ts - 10 tests pass (no regressions)
  • Built standalone executable with --single flag and verified it works

Screenshots / recordings

N/A - no UI changes

Checklist

  • I have tested my changes locally
  • I have not included unrelated changes in this PR

@github-actions
Contributor

The following comment was made by an LLM; it may be inaccurate:

Found a potential duplicate:

PR #13896 - feat(opencode): add auto loading models for litellm providers

Why it's related: This PR appears to implement very similar functionality - automatic model loading for LiteLLM providers. Given that PR #14277 also adds dynamic model fetching for OpenAI-compatible providers (including LiteLLM), these PRs may be addressing the same or overlapping feature requests. You should verify whether PR #13896 was already merged or closed and check if #14277 extends/improves upon it.

@shivamashtikar
Author

This PR provides compatibility with any OpenAI-compatible endpoint (including LiteLLM), as opposed to #13896, which only handles LiteLLM.

@PratikNarola1

LGTM

@8x22b

8x22b commented Feb 19, 2026

PLEASE MERGE IT

@espetro

espetro commented Feb 20, 2026

up! we're looking forward to it!

@shivamashtikar shivamashtikar force-pushed the feat/dynamic-model-fetch branch 2 times, most recently from 6210e12 to 91ef011 on February 24, 2026 09:47
@shivamashtikar
Author

shivamashtikar commented Feb 25, 2026

Hi @alexyaroshuk @adamdotdevin, could you please check this PR and let me know if it can be merged or if you need any more changes?

@alexyaroshuk
Contributor

Hi @alexyaroshuk @adamdotdevin, could you please check this PR and let me know if it can be merged or if you need any more changes?

unit tests failing:

4 tests failed:
(fail) session.llm.stream > sends temperature, tokens, and reasoning options for openai-compatible models [31.00ms]
(fail) session.llm.stream > sends responses API payload for OpenAI models [63.00ms]
(fail) session.llm.stream > sends messages API payload for Anthropic models [94.00ms]
(fail) session.llm.stream > sends Google API payload for Gemini models [109.00ms]

I checked with upstream and confirmed that these tests pass there; your PR introduces the failures. You can verify by running bun test from packages/opencode.

…iders

Add fetchModels config option that fetches available models from a
provider's API at startup. Tries LiteLLM /model/info first for rich
metadata (limits, costs, capabilities), falls back to standard /models
endpoint. Manually configured models override fetched ones.
…startup

Move dynamic model fetching to a fire-and-forget background task within
state(). Models are merged into the providers object asynchronously
after state initialization completes, so startup is not delayed by
network requests.
…to true

Rename config field to shouldFetchModels for clarity. Default to true
so custom providers with a baseURL automatically discover models
without explicit opt-in. Update PR docs accordingly.
@shivamashtikar shivamashtikar force-pushed the feat/dynamic-model-fetch branch from 584e91e to 66329b2 on March 2, 2026 22:50
…iders

- Add shouldFetchModels option to provider config (defaults to true)
- Update tests to disable model fetching in test fixtures
@shivamashtikar shivamashtikar force-pushed the feat/dynamic-model-fetch branch from 66329b2 to b1b97d6 on March 2, 2026 22:54
- Test shouldFetchModels: false prevents API calls
- Test manual models work when fetching is disabled
- Test blacklist/whitelist filtering with manual models
@shivamashtikar
Author

shivamashtikar commented Mar 2, 2026

Hi @alexyaroshuk, apologies, I missed those. I've fixed the test cases and also added new test cases related to my change. Can you please review and merge if everything seems fine?

@github-actions github-actions bot added needs:compliance This means the issue will auto-close after 2 hours. and removed needs:compliance This means the issue will auto-close after 2 hours. labels Mar 2, 2026
@github-actions
Contributor

github-actions bot commented Mar 2, 2026

Thanks for updating your PR! It now meets our contributing guidelines. 👍

@alexyaroshuk
Contributor

Hi @alexyaroshuk, apologies, I missed those. I've fixed the test cases and also added new test cases related to my change. Can you please review and merge if everything seems fine?

I cannot merge; I simply try to help with review. Looks like the test issue is fixed, but merging is up to Adam.

@shivamashtikar
Author

Sure @alexyaroshuk, please help with the review.



Development

Successfully merging this pull request may close these issues.

[FEATURE]: Dynamic model switching
Can I let the opencode to select and switch different local models automatically?

5 participants